Index
- Basics of video signals
- Video signal standards
- General video signal standards information
- Composite video formats
- S-video
- Component video formats
- Advanced video standards
- Program Delivery Control (PDC)
- Teletext
- Closed Captioning
- Data transmission inside TV signal
- V-Chip
- Time code
- Video signal distribution
- Video signal switching
- Video standards conversion
- Film to video conversion
- Other related information
Video Signal Standards and Conversion Page
Basics of video signals
Any TV signal is a complex combination of picture and timing information. Video signals are complex waveforms comprising signals that represent a picture as well as the timing information needed to display it. You can think of an image as a two-dimensional array of intensity or color data; a camera, however, outputs a one-dimensional stream of analog or digital data. Standard analog video signals are designed to be broadcast and displayed on a television screen. To accomplish this, a scheme specifies how the incoming video signal gets converted to the individual pixel values of the display.

An analog video signal consists of a low-voltage signal containing the intensity information for each line, in combination with timing information that ensures the display device remains synchronized with the signal. The signal for a single horizontal video line consists of a horizontal sync pulse, back porch, active pixel region, and front porch. The horizontal sync (HSYNC) pulse signals the beginning of each new video line. It is followed by the back porch, which is used as a reference level to remove any DC component from the floating (AC-coupled) video signal; for monochrome signals this clamping takes place on the back porch. Color information can be included along with the monochrome video signal (NTSC and PAL are common standard formats): a composite color signal consists of the standard monochrome signal (RS-170 or CCIR) with color information added.

Another aspect of the video signal is the vertical sync (VSYNC) pulse. This is actually a series of pulses that occur between fields to signal the monitor to perform a vertical retrace and prepare to scan the next field. There are several lines between each field which contain no active video information. Some contain only HSYNC pulses, while several others contain a series of equalizing and VSYNC pulses. These pulses were defined in the early days of broadcast television and have been part of the standard ever since, although newer hardware technology has eliminated the need for some of the extra pulses.
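The bandwidth/resolution relationship behind these waveforms can be estimated with simple arithmetic: one cycle of the signal can at best represent two adjacent pixels (one light, one dark), so the required bandwidth is roughly the horizontal pixel count divided by twice the active line time. A minimal sketch of that estimate (the timing constants are the commonly quoted PAL/NTSC values; the function is an illustration, not a reference):

```python
# Estimate the video bandwidth needed for a given horizontal resolution.
# One signal cycle can represent two adjacent pixels (one light, one dark),
# so bandwidth ~ pixels / (2 * active line time).

def required_bandwidth_hz(h_pixels: float, active_line_time_s: float) -> float:
    """Bandwidth needed to resolve h_pixels across the active line."""
    return h_pixels / (2.0 * active_line_time_s)

# Commonly quoted analog line timings:
PAL_ACTIVE = 52.0e-6    # 64 us total line, ~12 us blanking
NTSC_ACTIVE = 52.6e-6   # 63.56 us total line, ~11 us blanking

print(required_bandwidth_hz(720, PAL_ACTIVE) / 1e6)   # ~6.9 MHz
print(required_bandwidth_hz(520, PAL_ACTIVE) / 1e6)   # ~5.0 MHz (PAL broadcast limit)
print(required_bandwidth_hz(440, NTSC_ACTIVE) / 1e6)  # ~4.2 MHz (NTSC luma limit)
```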
Most video sources you will encounter use interlaced video. For example, the video signal used in analogue TV broadcasts, video on VHS tapes and video on many DVDs is interlaced. Interlaced video sacrifices some picture quality to reduce bandwidth demands. Interlacing is in fact a clever way to compress a moving picture when digital compression methods are not available: it halves the bandwidth (nowadays, storage space) without losing vertical resolution in still areas, while moving areas are shown at half vertical resolution (where you don't notice it much, because the picture changes 50 or 60 times per second). So interlacing displays the non-moving parts at full resolution and the moving parts at half resolution, but fluidly. It allows a quick enough screen refresh at reasonable bandwidth so that the TV image does not flicker too much and motion is smooth.

An interlaced video display builds an image on the picture tube in two phases, known as "fields", consisting of even and odd horizontal lines. The complete image (a "frame") is created by scanning an electron beam horizontally across the screen, starting at the top and moving down after each horizontal scan until the bottom of the screen is reached, at which point the scan starts again at the top. On an interlaced display, even-numbered scan lines are displayed in the first field and odd-numbered lines in the second field. For a given screen resolution, refresh rate (frames per second) and phosphor persistence, interlacing reduces flicker because the top and bottom of the screen are redrawn twice as often as if the scan simply proceeded from top to bottom in a single vertical sweep. Analog camcorders, VCRs and similar devices do not mix the recorded pictures; they record field after field. Analog camcorders use "odd" and "even" sets of scan lines too, but they don't intermix them into one frame, and in a typical interlaced camera signal the "odd" and "even" fields are captured at different times.

Interlacing works well with traditional analogue televisions, but it is a nuisance when the video signal needs to be processed with a computer or shown on a non-interlaced display device. To display interlaced video on a non-interlaced display, you need a process called deinterlacing. There are several deinterlacing techniques, but none of them is perfect. On a computer screen interlaced recordings are annoying to watch because the line artifacts are clearly visible, especially in scenes with movement from left to right (or right to left).
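To make the two simplest deinterlacing strategies concrete, here is a minimal sketch using NumPy arrays as fields (the function names are illustrative, not from any standard library): "weave" merges the two fields back into one frame, "bob" line-doubles a single field.

```python
import numpy as np

def weave(field_even: np.ndarray, field_odd: np.ndarray) -> np.ndarray:
    """Interleave two fields into one frame; sharp, but combs on motion."""
    h, w = field_even.shape
    frame = np.empty((2 * h, w), dtype=field_even.dtype)
    frame[0::2] = field_even   # even scan lines
    frame[1::2] = field_odd    # odd scan lines
    return frame

def bob(field: np.ndarray) -> np.ndarray:
    """Line-double a single field; no combing, but halves vertical detail."""
    return np.repeat(field, 2, axis=0)
```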
To capture and use those complex signals, you need special electronics to do the job. Your TV receiver sorts out this information by sampling the level (in % of modulation) of the complex signal and displaying a picture on the screen in relation to this signal. A composite video signal contains the same information but is not modulated onto an RF carrier. There are also many other video signal formats in use in various applications. Typical video signals you see nowadays are analogue video signals. Analog refers to changing the original signal acquired (in a camera) into something that represents the signal - in this case, into a waveform transferred through a video cable or other transmission medium (like through the air in TV broadcasts). There are also digital video signals, where the picture content is encoded in digital format (information converted to a series of bits which represent numbers).

Video systems are generally based on the RGB color space. Both the image source (in this case, the camera) and the display (a CRT) are "speaking" in terms of RGB. On the way from the image source to the display device, the signal is in many cases converted to other formats that are more convenient to transport than three separate RGB signals (for example, a composite video signal travels through one coax cable). The RGB system is not a perfect system. Accurate reproduction of color imagery via electronic means is a topic that can fill a book, but in simplified terms there are these major problems:

- No three-primary system can ever cover the entire range of colors perceivable by the human eye (due to the nature and overall shape of the "color space" in which they must be represented).
- None of the three primaries used - either by the camera or in the CRT - is fully saturated (due to technical/physical limitations), which further restricts the range of colors that can actually be reproduced correctly. Truly saturated purples and yellows are difficult if not impossible to produce in the typical RGB system.
- Both the image source (the camera) and the display (a CRT) may be "speaking" in terms of RGB values, but may not be using quite the same red, green, and blue, or the same proportions of each to make "white". This will alter the appearance of the color as displayed vs. what the camera "intended". The RGB values used are supposedly standardized by the broadcast television specifications, but there are still some differences (different broadcast specifications, computer systems with their own specifications, etc.).

- Basic Principles of Video - a good introductory chapter from a book
- Bandwidth Versus Video Resolution - Visual resolution in video systems is defined as the smallest detail that can be seen. This detail is related directly to the bandwidth of the signal: the more bandwidth in the signal, the more potential visual resolution. The converse is also true: the more the signal is band-limited, the less detail information will be visible. This article addresses this relationship and provides a straightforward way to determine the bandwidth required to achieve the desired visual resolution.
- Fields: Why Video Is Crucially Different from Graphics
- The Difference Between HDTV, EDTV, and SDTV - The consumer electronics industry has done a spectacular job spreading mass confusion about video. Time was when there was just TV. Now we've got SDTV, EDTV, HDTV, 480i, 480p, 525p, 720p, 1080i, progressive scan, component, composite, blah blah blah. Enough to make you feel like you need an engineering degree to buy a projector or TV.
- The Video Food Chain - a table of the video food chain, with RGBHV at the top, proceeding down the list to composite video
- Video Basics - This article covers many of the fundamentals of analog video.
- Video Signal Basics
Video signal standards
There are many different video formats and interfaces in use in video systems. They are used in different applications for various technical and economical reasons. All the interfaces below are designed to use 75 ohm coaxial cable (one coaxial cable or more per interface; typical impedance is 75 ohm ± 10%). The standard video signal output level is 1 Vpp ± 10% into 75 ohms. This signal level applies to video signals, like composite video, that carry sync information within the signal. Video inputs are generally designed to accept this specified input level with ± 3 dB or ± 6 dB accuracy (meaning a 0.5-2 V signal). Video signals that do not carry sync (for example RGB component signals) use a level of 0.7 Vpp (the same level as the picture part of a normal video signal). Here is a short primer on the most commonly used signal interface types, from the best to the worst in picture quality:

- RGB video is the highest quality video used in the professional A/V presentation industry and in computer video. It has one wire for each colour, usually with its own RF shielding to reduce interference and the resulting quality degradation. Nothing is better. The standard signal level for an RGB interface is 0.7 Vpp into a 75 ohm load. How the sync information is transferred varies from one RGB interface application to another (possibilities are sync-on-green, separate composite sync, and separate HSYNC + VSYNC signals).
- Component video is a bit of a misnomer - RGB is technically also component video, i.e. video whose components are transmitted separately. Usually when someone refers to component video, however, they mean colour-difference component video, in which the picture is transported as luminance plus two color difference signals. Color-difference component video (YUV or YCrCb) is the highest quality form of video typically used in the TV broadcasting industry. All signal components are designed to use 75 ohm coaxial lines. The Y component has an amplitude of around 1 Vpp and the other components somewhat less. Different names are used for the component formats, and they correspond to each other in the following way: YCrCb = YPrPb = YUV = Y, R-Y, B-Y.
- In the S-video format the picture is carried on two wires, luma and chroma (Y + C), or brightness and colour, again each with its own shielding to prevent interference. The Y signal has a nominal level of 1 Vpp and the C signal a level of around 0.5 V. Both use 75 ohm terminated coaxial lines as the medium.
- Composite video (sometimes labeled just "video" on connectors) uses one wire (with its own shielding) to carry all video information (red, blue, green and sync) mixed together. This generally gives a pretty good picture, but quality depends greatly on the generating and receiving equipment. This format is quite often referred to as PAL video or NTSC video, depending on which video format is used. The nominal signal level is 1 Vpp on a 75 ohm terminated line.
- The RF video format goes into the antenna plug on the back of your TV. This is one shielded wire carrying not only the NTSC or PAL video information but the sound information as well. In the case of the cable coming out of your wall, this one wire contains many (in some cases hundreds of) channels. Unfortunately, in real-life situations those many channels, and the sound together with the video, can interfere with each other and degrade picture quality. Antenna networks try to give you a signal level of 60..80 dBuV (1..10 mV) for your TV to be happy with the signal.
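As a quick check of those dBuV figures (0 dBuV corresponds to 1 microvolt), here is a small conversion sketch:

```python
def dbuv_to_volts(dbuv: float) -> float:
    """Convert a level in dBuV (0 dBuV = 1 microvolt) to volts."""
    return 1e-6 * 10 ** (dbuv / 20.0)

print(dbuv_to_volts(60))  # 0.001 V = 1 mV
print(dbuv_to_volts(80))  # 0.01 V = 10 mV
```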
In the practical video world there are three color video encoding standards: PAL, NTSC and SECAM. With each of them you will get a color picture if the receiving equipment can decode that particular video format. If the receiving equipment can't decode the particular video standard, most often the result is a black and white picture (sometimes no usable picture at all): the black and white part of the picture is quite similar across these video standards, but the color information is encoded very differently, and if the TV can't find proper color information it will just display the black and white part of the picture. Here are short descriptions of those systems, and of some other standards and de-facto standards that you might encounter in the video world:

- NTSC: The USA uses 60 Hz power, and the NTSC field rate is close to it: 59.94 Hz. The picture frame has 525 lines. The NTSC color subcarrier is at 3.579545 MHz (3.58 MHz for short). In NTSC, the two color signals are modulated onto the color carrier using quadrature AM modulation. This modulation is problematic, because transmission impairments like severe phase changes cause color errors (which is why sets need a HUE or TINT control).
- PAL: Europe uses 50 Hz power, so the field rate is also 50 Hz. The picture frame has 625 lines. The color information is transmitted on a 4.43 MHz color subcarrier with about 1.4 MHz bandwidth. The PAL system uses a specially modified quadrature AM modulation with subcarrier phase shifts between picture lines. This allows the decoder to combine the color information of two lines, using a delay line in the TV set, so that phase errors cancel: severe phase changes in the transmission of a PAL signal show up as weakened, but still correct, colors. There are several variations of the PAL system in use. Common types are B, G and H; less common types include D, I, K, N and M. The different types are generally not compatible at the TV broadcasting level (the RF signals you pick up with an antenna), but most versions are compatible as composite video signals. PAL-B, G, H, I and D, as far as the actual video is concerned, are all the same format: all use the 625/50 line/field rate, scan at 15,625 h-lines/sec and use a 4.43361875 MHz color subcarrier. The only difference is in how the signal is modulated for broadcast; thus B, G, H, I and D designate broadcast variations (different luminance bandwidths and different audio subcarrier frequencies) rather than variations of the video format. PAL-I, for example, has been allocated a wider bandwidth than PAL-B, necessitating that the sound carrier be placed 6 MHz above the picture carrier instead of 5.5 MHz. PAL-M and PAL-N are considerably different from the other versions, as their line/field rates and color subcarrier frequencies differ from standard PAL. The PAL system was originally developed by Walter Bruch at Telefunken in Germany and is used in much of western Europe, Asia, throughout the Pacific and in southern Africa.
- SECAM: SECAM was developed in France and is used in France and its territories, much of Eastern Europe, the Middle East and northern Africa. This system uses the same resolution as PAL (625 lines) and the same frame rate (25 per second), but the way SECAM processes the color information is unique. SECAM was not developed for any technical reason of merit but was mainly invoked as a political statement, as well as to protect French manufacturers from stiff foreign competition. The Eastern Bloc countries during the cold war adopted variations of SECAM simply because it was incompatible with everything else. For picture scanning and format SECAM is the same as PAL, but where color transmission is concerned SECAM is a totally different system from PAL and NTSC. SECAM uses FM modulation on the color subcarrier to send one color component at a time: one line transmits the R-Y signal, and the next line transmits the B-Y signal. With delay lines, the color signals sent at different times can be combined to decode the color picture. SECAM has some problems, however. You cannot mix together two SECAM video signals, which is possible with two locked NTSC or PAL signals; most SECAM TV studios therefore use PAL equipment, and the signal is converted to SECAM before it goes on the air. Also, color noise is higher in SECAM, and recording a SECAM signal to video tape is hard (it easily gives poor picture quality). If that wasn't bad enough, there are other variations of SECAM: SECAM-L (also known as French SECAM) used in France and its now former territories, plus MESECAM and SECAM-D, which are used primarily in the C.I.S. and the former Eastern Bloc countries. Naturally, none of the three variations is compatible with the others.
- ATSC: The Advanced Television Systems Committee (ATSC) is the committee that specifies the digital TV broadcasting system in use in the USA. This standard supports both standard-definition and HDTV broadcasts. There are 18 approved formats for digital TV broadcasts; these formats cover both SD (640x480 and 704x480 at 24p, 30p, 60p, 60i) and HD (1280x720 at 24p, 30p and 60p; 1920x1080 at 24p, 30p and 60i).
- CIF: Common Interface Format. This video format was developed to easily allow video phone calls between countries. The CIF format has a resolution of 352 x 288 active pixels and a refresh rate of 29.97 frames per second.
- QCIF: Quarter Common Interface Format. A video format to allow the implementation of cheaper video phones. It has a resolution of 176 x 144 active pixels and a refresh rate of 29.97 frames per second.
- CCIR: The CCIR is a standards body that originally defined the 625-line, 25 frames per second TV standard used in many parts of the world. The CCIR standard defines only the monochrome picture component; there are two major colour encoding techniques used with it, PAL and SECAM.
- CCIR video: Video signal with the same timings as the PAL and SECAM systems used in Europe (50 fields per second, 625 lines per frame).
- HDTV: HDTV (high-definition TV) encompasses both analog and digital televisions that have a 16:9 aspect ratio and approximately 5 times the resolution of standard TV (double vertical, double horizontal, wider aspect). High definition is generally defined as any video signal that is at least twice the quality of the current 480i (interlaced) analog broadcast signal. Generally the 720p and 1080i video formats are what the term HDTV properly covers.
- ITU-R BT.470: Conventional television systems characteristics
- ITU-R BT.601: Studio encoding parameters of digital television for standard 4:3 and wide-screen 16:9 aspect ratios
- ITU-R BT.656-4: Interfaces for digital component video signals in 525-line and 625-line television systems operating at the 4:2:2 level of Recommendation ITU-R BT.601 (Part A)
- ITU-R BT.709: Parameter values for the HDTV standards for production and international programme exchange
- ITU-R BT.804: Characteristics of TV receivers essential for frequency planning with PAL/SECAM/NTSC television systems
- RS-170: The standard that was used for black and white TV in the USA. It defines voltage levels, blanking times, widths of the sync pulses, etc.; the specification spells out everything required for a receiver to display a monochrome picture. For example, the output of black and white security cameras conforms to the RS-170 specification. When the NTSC decided on the color broadcast standard, RS-170 was modified slightly so that color could be added; the result, RS-170A, is the same specification extended with the color components.
- RS-170 RGB: Refers to RGB signals timed to RS-170 specifications.
- RS-330: A standard recommended by the EIA for signals generated by closed-circuit TV cameras scanned at 525/60 and interlaced 2:1. The standard is more or less similar to RS-170, but H-sync pulses are absent during V-sync. Equalizing pulses are not required and may be added optionally during the V-blanking interval. This standard is also used for some color television studio electrical signals.
- RS-343: A standard for high resolution video (e.g. workstations), while RS-170A is for lower resolution video. RS-343 was introduced later than RS-170 and was intended, according to its title, as a signal standard for "high-definition closed-circuit television". RS-343 specifies a 60 Hz non-interlaced (progressive) scan with a composite sync signal, at 675 to 1023 lines. This standard is used by some computer systems and high resolution video cameras.
- RS-343A: EIA standard for high resolution monochrome CCTV. Based on RS-343.
- VGA: VGA (Video Graphics Array) originates from the 640x480 color graphics adapter used in the first IBM PS/2 computers. There never really was an official standard for VGA video, but it has served as a loosely defined "nearly industry standard" for many makers of graphics cards and display devices. VGA uses RGB signals and separate sync signals (HSYNC and VSYNC). VGA is quite closely related to RS-343 in many details, but the difference is that VGA uses a non-interlaced picture.
- SVGA: SVGA (Super VGA) is an extension of the normal VGA system. The term has been used in different places to mean different kinds of extensions with higher resolutions. Generally SVGA is used to refer to an 800x600 resolution computer video signal.
- XGA: XGA (Extended Graphics Array) is an IBM high resolution graphics card. The term is generally used to indicate a 1024x768 resolution computer video signal.
- SXGA: SXGA (Super XGA) is used to refer to resolutions higher than XGA. SXGA most often refers to resolutions like 1280x1024 and 1400x1050 pixels.
- UXGA: UXGA (Ultra XGA) is used to refer to a 1600x1200 pixel resolution.
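The exact subcarrier frequencies quoted for NTSC and PAL above are tied to the line rates by fixed ratios. A small sketch of that arithmetic:

```python
# Color subcarrier frequencies derived from horizontal line rates.

# NTSC: line rate is 4.5 MHz / 286; subcarrier is 227.5 times the line rate.
ntsc_line_rate = 4.5e6 / 286             # ~15734.27 Hz
ntsc_subcarrier = 227.5 * ntsc_line_rate
print(ntsc_subcarrier)                   # ~3579545 Hz = 3.579545 MHz

# PAL: line rate is 15625 Hz; subcarrier is (1135/4 + 1/625) times the line rate.
pal_line_rate = 15625.0
pal_subcarrier = (1135 / 4 + 1 / 625) * pal_line_rate
print(pal_subcarrier)                    # 4433618.75 Hz = 4.43361875 MHz
```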
Here is an overview of different video display resolution standards and de-facto standards in use:
Computer Standard | Resolution
---|---
VGA | 640 x 480 (4:3)
SVGA | 800 x 600 (4:3)
XGA | 1024 x 768 (4:3)
WXGA | 1280 x 768 (15:9)
SXGA | 1280 x 1024 (5:4)
SXGA+ | 1400 x 1050 (4:3)
WSXGA | 1680 x 1050 (16:10)
UXGA | 1600 x 1200 (4:3)
UXGAW | 1900 x 1200 (1.58:1)
QXGA | 2048 x 1536 (4:3)
QVGA (quarter VGA) | 320 x 240 (4:3)

Analogue TV Standard | Resolution
---|---
PAL | 720 x 576
PAL VHS | 320 x 576 (approx.)
NTSC | 640 x 482
NTSC VHS | 320 x 482 (approx.)

Digital TV Standard | Resolution
---|---
NTSC (preferred format) | 648 x 486
D-1 NTSC | 720 x 486
D-1 NTSC (square pixels) | 720 x 540
PAL | 720 x 576
D-1 PAL | 720 x 576
D-1 PAL (square pixels) | 768 x 576
HDTV | 1920 x 1080

Digital Film Standard | Resolution
---|---
Academy standard | 2048 x 1536

- A crash course in color conversion - Perceived television-image quality depends on source-material characteristics, on how that material gets transported to the reception system, and on the capabilities of the circuits in that reception system.
- A Simplified Guide to the NTSC Video Signal - This page describes the basics of the television signal, how to analyze it on a waveform monitor or vectorscope, and how to set up a monitor to color bars.
- Characteristics of B,G/PAL and M/NTSC television systems - information from Recommendation ITU-R BT.470-5 Conventional Television Systems
- Digital video? Don't forget the analog - The field of television has undergone more profound changes in recent years than in the previous 40; digital technology is coming, but analogue video will stay with us for a long time still.
- Home Video Systems Visual Resolution Comparison - The intent of this document is to display graphically the difference in luminance resolution of several home video systems. It covers luminance resolution because that is the single most interesting aspect of a video system, and the easiest to simulate with a computer program.
- ITU BT Series Recommendations - Broadcasting service (television) - the list of standards is free; the standards texts themselves cost money to order
- List of ITU-R Recommendations BT Series
- National Television Systems Committee Video Display Signal IO - NTSC video timing and waveforms.
- PAL, NTSC and SECAM Comparisons
- PAL, NTSC and SECAM Comparisons - There are 3 main standards in use around the world: PAL, NTSC and SECAM. Each one is incompatible with the others. For example, a recording made in France could not be played on an American VCR.
- The Engineer's Guide to Decoding & Encoding - The subject of video signal encoding and decoding has become increasingly important with the trend towards the use of component video technology in video production systems. The handbook treats the entire subject of encoding and decoding of video signals from first principles, leading up to today's most sophisticated technology.
- TV Systems: A Comparison
- Video Cabling Standards - A question that frequently arises amongst DVD-watchers is what effect connecting the DVD player to the TV by different cabling systems has on the picture. This article describes different video signal transmission formats and their effect on picture quality.
- Video Primer! Why Bother With RGB??
- Worldwide TV Standards - A Web Guide - documents about TV and video standards
- World-Wide T.V. Standards
General video signal standards information
- Concerning "legal" and "valid" video signals - there is confusion in the video industry concerning the terms legal, illegal, valid and invalid
- Info on the three color TV systems - short descriptions of NTSC, PAL and SECAM
- National Television System Committee - introduction to the NTSC video standard with circuit examples
- NTSC four-field sequence
- NTSC/PAL Reference - basic reference to NTSC/PAL test signals, measurements and standards in Windows (3.1 or 95) help format
- RS-170 video signal - document also mentions RS-170A, RS-170 RGB, RS-330, RS-343, RS-343A and CCIR formats
- RS-170A Sync Waveforms - details on the NTSC standard
- RS-170 Video Signal Formats
- Short description of PAL timing info
Composite video formats
Composite video is a single video signal that includes all the sync, black & white and color information in the same signal. Composite video is the most often used video signal format in the TV broadcasting industry; it is used everywhere from TV cameras to TV broadcasts to consumer video equipment interconnections. A composite video signal is most often transported using one 75 ohm coaxial cable terminated with RCA or BNC connectors, where the center pin carries the signal and the outer shield of the connector is signal ground. Professionals always use 75 ohm cable for composite video. In consumer applications you can also see other cables used for short runs (up to a few meters), because with short cables the impedance mismatches generally do not have much effect on picture quality. The standard video signal amplitude for composite video is 1 V p-p. An automatic gain circuit in much equipment guarantees that proper operation is usually maintained even when the input signal varies between 0.5 V and 2.0 V p-p.

Composite video signals have a number of unavoidable image problems because of inherent limitations of the color TV systems (PAL, NTSC and SECAM). The main problem is that once the colour (C) and the black and white (Y) information have been put together, they can no longer be perfectly separated, due to fundamental design limitations of the color TV systems. In PAL, the color information is modulated onto a 4.43 MHz subcarrier (the exact frequency is 4.43361875 MHz ± 5 Hz). In the NTSC video system the color information is modulated onto a 3.58 MHz subcarrier (the exact frequency is 3.579545 MHz ± 10 Hz). The composite video format isn't lossless (compared to the original RGB): neither NTSC nor PAL is "lossless". Limiting the bandwidths of the picture signals (mostly color) in the encoding, and the comb filtering needed to decode, cause an encode/decode loss.
Basics of composite video
- Intermediate NTSC Video Testing - Nonlinear Distortions - This document has information on luminance nonlinearity and differential gain
Converting composite video back to RGB
- Comb Filters NTSC Decoding Basics Article Series
- Decoders: As Easy As R, G, B - how NTSC composite video is converted back to RGB
- NTSC Decoding Basics (Part 1): Introduction
- NTSC Decoding Basics (Part 2): Notch Filter Decoders
- NTSC Decoding Basics (Part 3): Line Comb Filter Decoders
- NTSC Decoding Basics (Part 4): Adaptive Comb Filter Decoders
- Video Decoding - The function of a video decoder is to separate the luminance and chrominance information contained in a composite video signal. There are several ways to achieve this, each with its own advantages.
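As a tiny illustration of the separation problem, a one-line comb filter exploits the fact that the NTSC subcarrier phase inverts between successive scan lines: averaging adjacent lines cancels chroma, differencing them cancels luma. A minimal sketch (assumes the sampled composite frame is a NumPy array; real decoders are far more elaborate, as the links above describe):

```python
import numpy as np

def line_comb_separate(composite: np.ndarray):
    """Crude 1-line comb: split composite into luma and modulated chroma.

    Relies on the NTSC chroma phase inverting on successive lines, so the
    sum cancels chroma and the difference cancels luma. Vertical color
    transitions leak between the outputs (the classic "hanging dots"),
    and the first row wraps around (edge effects ignored in this sketch).
    """
    above = np.roll(composite, 1, axis=0)   # previous scan line
    luma = (composite + above) / 2.0        # chroma cancels
    chroma = (composite - above) / 2.0      # luma cancels
    return luma, chroma
```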
S-video
S-Video is one of the higher quality methods of transmitting a television signal from a device such as a camcorder, DVD player, or digital satellite receiver. The S-video signal is also known as Y/C video. Sometimes you will also see the name S-VHS video used (use of this name is not recommended). In S-video, the chroma and luminance are separated to eliminate interference between them and to allow a higher bandwidth for each. S-video (Y/C) uses two separate video signals: the luminance (Y) is the black & white portion, providing brightness information, and the chrominance, or chroma (C), is the colour portion, providing hue and saturation information. Separating the signal components prevents nasty things like color bleeding and dot crawl, and helps increase clarity and sharpness. S-Video is "essentially" the same thing as Chroma & Luma, Brightness & Color, or Y/C; all these names mean the same thing, in a vague sort of way, so don't get confused if you see different names for this connection.

S-Video appeared together with the first S-VHS VCR systems. S-video was also used by the home computer industry starting in the late 1980s. Separating the color (C) and luminance (Y) information and carrying them through separate wires - a system that became known as Y/C and later S-Video - was reasonably easy to integrate into existing equipment while providing a distinct increase in picture quality. Since the color and luminance are carried in separate cables, it has the potential to eliminate both the cross-color problem and the trap problem. While most equipment takes full advantage of the signal separation, not all equipment properly implements the two separate channels. S-Video has become a popular standard for home use, especially with DVD players. Panasonic's version of S-Video, using the 4-pin mini-DIN connector, has become the de-facto standard these days, meaning that S-Video (also called Y/C) is usually carried on cables that end in 4-pin mini-DIN connectors (other connectors can also be used, like a pair of BNC connectors, a SCART connector, or a 7-pin mini-DIN on some computer graphics cards).

Quite often you see documents that compare composite video and S-video to each other. The problem with composite video vs. S-Video isn't one of bandwidth loss. S-Video can provide a better image simply because the luminance and chrominance components of the TV signal (also known as "Y" and "C") are kept separate and should therefore never have a problem with mutual interference, which results in effects such as the infamous "chroma crawl". However, there is no reason to think that an S-Video connection will provide a chrominance signal of greater bandwidth. It could, were such a signal available, but there's just no reason to expect that it will.

- Taming the Composite, S-Video, Component and RGB Jungle - information on the different video interfaces used in DVD players
Component video formats
There are many component video formats in use. The most commonly used component video formats are RGB, YPbPr and YCbCr. The RGB format is the basic format in which the signal is generated in the video camera. In the other formats, the Y component is the black and white information contained within the original RGB signal, and the Pb and Pr signals are colour difference signals, which are mathematically derived from the original RGB signal. It is important to realize that what is commonly called "component video" (YPbPr or YCbCr) output and RGB video output are not the same and are not directly compatible with each other; however, they are easily converted either way, at least in theory.

- Component Video Sync Formats
- Consumer Analog RGB and YUV Video Formats
- Keeping in Sync - why there are many different sync standards and which is the best one
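The mathematical derivation mentioned above is, for standard-definition (Rec. 601) signals, a fixed linear combination of R, G and B. A minimal sketch in analog style, with values normalized to the 0..1 range (digital YCbCr adds offsets and scaling on top of this):

```python
def rgb_to_ypbpr(r: float, g: float, b: float):
    """Rec. 601 RGB -> Y, Pb, Pr for normalized (0..1) inputs."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    pb = (b - y) / 1.772                    # scaled B-Y color difference
    pr = (r - y) / 1.402                    # scaled R-Y color difference
    return y, pb, pr

print(rgb_to_ypbpr(1.0, 1.0, 1.0))  # white: (1.0, 0.0, 0.0)
print(rgb_to_ypbpr(0.0, 0.0, 1.0))  # blue: y = 0.114, pb = 0.5, pr ~ -0.081
```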
Advanced video standards
- US HDTV Standard - ATSC Digital Television Standard
- Video Module Interface - developed in cooperation with many multimedia IC vendors in order to standardize the video interfaces between video chips
- Widescreen signalling - in PDF format, covers PAL, NTSC and SECAM systems
- Widescreen Signalling (WSS) - To facilitate the handling of various aspect ratios of program material received by TVs, a widescreen signalling (WSS) system has been developed. This standard allows a WSS-enhanced 16:9 TV to display programs in their correct aspect ratio automatically.
Program Delivery Control (PDC)
PDC is an invention that enables you to set your video recorder to tape a programme knowing that it will be recorded in full, even if the programme is shown later than advertised. Programme Delivery Control (PDC) is a system which permits simple programming and recording control of VCRs using teletext technology. It promises simplified VCR programming (through information in the teletext system), and programmes are recorded even if the broadcaster changes the transmission times due to over-runs, schedule changes, etc. Under PDC the VCR can also be programmed to look out for and record certain types or categories of programme: in addition to on/off recording control information, data is transmitted about programme categories and intended audiences. General categories include sport, music, leisure, etc.; intended audience data identifies different age groups, disabled people and so on. For example, it is possible to programme a VCR to record all programmes featuring rock music or athletics. For PDC to work, the broadcaster must transmit PDC data and the viewer must have a VCR capable of making and controlling PDC selections. Technically, PDC is just extra data transmitted within the teletext data stream. ITU-R BT.809 is the standard for the programme delivery control (PDC) system for video recording.

- PDC Live - Programme Delivery Control information home page
- PDC-OnAir YLE TV 1 - shows what PDC data is currently being sent out, with descriptions (in Finnish)
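At the data level, the recorder matches a Programme Identification Label (PIL) broadcast in the teletext stream against the label it was programmed with. To my understanding a PIL packs the announced start time into 20 bits (5 for day, 4 for month, 5 for hour, 6 for minute); a small decoding sketch, assuming the 20-bit value has already been extracted from the teletext packet (treat the exact bit layout as an assumption, not a reference):

```python
def decode_pil(pil: int):
    """Unpack an assumed 20-bit Programme Identification Label.

    Bit layout assumed here (MSB first): day(5), month(4), hour(5), minute(6).
    """
    day = (pil >> 15) & 0x1F    # 1..31
    month = (pil >> 11) & 0x0F  # 1..12
    hour = (pil >> 6) & 0x1F    # 0..23
    minute = pil & 0x3F         # 0..59
    return day, month, hour, minute
```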
Teletext
Teletext is a method of transmitting text pages to television sets within TV broadcasts. The teletext system is in use in many European countries. A teletext service consists of a number of pages, each page consisting of a screen of information. These pages are transmitted one at a time, utilising spare capacity in the television composite video signal. When the complete service has been transmitted, the cycle is repeated, although the broadcaster can choose to transmit some pages more frequently if required. A domestic television set equipped with a suitable teletext decoder can display any one of these pages at a time. The viewer selects the page for display by means of a remote handset. The service is one way: the user is unable to request a page directly and can only instruct the decoder to search for a particular page in the teletext data stream. There will usually be a delay before the requested page appears in the transmission cycle; when the page is detected, the decoder captures and displays the information contained in it. Thus the more pages within the service, the longer the access time. For this reason, broadcasters usually adjust the size of their services to obtain a cycle time of around 30 seconds, and therefore an average access time of 15 seconds.

A teletext service is divided into up to eight magazines; each magazine can contain up to 100 pages. Magazine 1 comprises page numbers 100 to 199, magazine 2 numbers 200 to 299, and so on. Each page can also have associated sub-pages, which can be used to extend the number of individual pages within each magazine. A teletext display consists of letters, numbers, symbols and simple graphic shapes. In addition, there are a number of control codes which allow selection of graphic or text colours and other display features known as attributes. The characters available for display in the teletext system are the letters A to Z, a to z, numerals 0 to 9, and the common punctuation marks and symbols, including currency signs, accents and a whole range of other character sets. Other graphic shapes (termed block graphics) are used to create simple pictures.

The history of teletext starts in the early 1970s, when British broadcasters investigated extending the use of the existing UHF TV channels to carry a variety of information. After discussions with industry a common teletext standard was agreed, and following extensive trials a pilot teletext service was started in 1974. Teletext was introduced in the UK on a full commercial basis in 1976. Fastext, a means of reducing wait times for pages, was introduced in 1987 and helped to spread consumer acceptance. Teletext is now included as a standard feature on many European TV sets.

The characters that make up a teletext page are transmitted in the Vertical Blanking Interval (VBI) of the television signal. Lines 6 to 22 in field 1 and 319 to 335 in field 2 are available to carry teletext data. Each character or control code is represented by a 7-bit code plus an error checking parity bit; if the teletext decoder detects a parity error, the character is not displayed. The data for one teletext display row, together with addressing information, is inserted in one VBI line. Since there are 24 display rows or packets per teletext page, it takes 24 data lines to transmit a teletext page. Bits are represented by a two-level NRZ signal. Synchronisation information is included at the start of each packet to indicate bit and byte positions.
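The page access times quoted above follow directly from the transmission arithmetic. A rough sketch (the number of VBI lines actually used varies by broadcaster; the figures below are assumptions for illustration):

```python
# Rough teletext page cycle time estimate.
rows_per_page = 24        # display rows transmitted per page
fields_per_second = 50    # 625/50 system
vbi_lines_used = 8        # per field; broadcaster-dependent assumption

rows_per_second = vbi_lines_used * fields_per_second  # 400 rows/s
pages_per_second = rows_per_second / rows_per_page    # ~16.7 pages/s
pages_in_service = 300                                # assumption

cycle_time = pages_in_service / pages_per_second
print(cycle_time)         # ~18 s cycle -> ~9 s average access time
```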
ITU-R BT.653 (formerly known as CCIR 653) is a recommendation that defines the various teletext standards used around the world; teletext systems A, B, C and D are defined for both 525-line and 625-line TV systems. The teletext system in the USA is called NABTS (North American Broadcast Teletext Specification); it is specified in EIA-516/ITU-R BT.653. Some notes on recording the teletext signal with video recording devices:

- A normal VHS tape recorder does not have sufficient bandwidth to record teletext signals.
- Decent S-VHS recorders have (just) sufficient bandwidth to record the teletext data stream in S-VHS mode.
- Several VHS models are available that record decoded subtitles "burnt into" the picture.
- DV recorders (and presumably Digital8) won't record the vertical interval of the video signal that carries the teletext information.
- DVD+RW recorders should have sufficient bandwidth, and digital stability on playback, to allow the data stream to be recorded, provided they aren't stripping off the vertical interval.

- MRG Systems Teletext tutorial
- Teletext Transmission Details
- VBI Data Capacity with NABTS and Equivalent RS-232 Data Throughput Using Norpak Forward Error Correction - This application note provides a summary of the data capacity of a VBI line using the North American Basic Teletext Specification (NABTS) in a 525-line system and converts this to the equivalent asynchronous RS-232 input capacity for each of the three data modes.
- What is Teletext
Closed Captioning
Captions are text versions of the spoken word. Captions make the audio content perceivable by those who do not have access to the audio (can't hear it). Though captioning is primarily intended for those who cannot hear the audio, it has been found to greatly help those who can hear it, as well as those who may not be fluent in the language in which the audio is presented.

Closed captions are visible text for spoken audio, transmitted invisibly within the TV signal in the USA. Closed captions are captions that are hidden in the video signal, invisible without a special decoder; the place they are hidden is called line 21 of the vertical blanking interval (VBI). On newer televisions, closed captions can be enabled via the menu. The US captioning system is not compatible with teletext; captioning uses video line 21 to transport its data. The method of encoding used in North America allows two characters of information to be placed in each frame of video, and there are 30 frames in a second. This corresponds (roughly) to 60 characters per second, or about 600 words per minute. The character set was designed for the United States and really has very little beyond basic letters, numbers, and symbols. Closed captioning can be recorded to a VCR along with the video signal; because of the slow data rate, closed captioning survives even on poor VHS recordings.

Digital television broadcasts in the USA also have closed captioning functionality, but it is implemented differently than in the NTSC TV system. Digital Television Closed Captioning (DTVCC), formerly known as Advanced Television Closed Captioning (ATVCC), is the migration of the closed-captioning concepts and capabilities developed in the 1970s for NTSC television video signals to the high-definition television environment defined by the ATV Grand Alliance and standardized by the ATSC (Advanced Television Systems Committee). This new environment provides for larger screens, higher screen resolutions, enhanced closed captions, and higher transmission data rates for closed captioning. The DTVCC specification is defined in the Electronic Industries Association publication EIA-708.
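The two caption bytes on line 21 are each sent as 7 data bits plus an odd parity bit. A minimal sketch of the validity check a decoder applies to each received byte (illustrative only):

```python
def has_odd_parity(byte: int) -> bool:
    """True if the 8-bit line-21 byte has odd parity (i.e. is valid)."""
    return bin(byte & 0xFF).count("1") % 2 == 1

def decode_caption_byte(byte: int):
    """Strip the parity bit and return the 7-bit character, or None if invalid."""
    if not has_odd_parity(byte):
        return None
    return chr(byte & 0x7F)
```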
Data transmission inside TV signal
V-Chip
The FCC adopted a system in the USA to block the display of television programming based upon its rating. The V-Chip reads rating information encoded in the rated program and blocks programs from the set based upon the rating selected by the parent. The V-chip is a standard for placing program rating information in a television program so that parents can choose to filter what their children see. This information is carried in field 2 of the caption area (line 21). The standard for television content advisories ("ratings") is EIA-744-A.

- V-Chip Homepage - the FCC adopted rules requiring all television sets with picture screens 33 centimeters (13 inches) or larger to be equipped with features to block the display of television programming based upon its rating
Time code
Time code is time information included in the video signal, or stored alongside a taped video signal. This time information is very useful in video editing and post-processing applications. When you require pieces of audio, video, or music technology equipment (e.g. a tape recorder and a sequencer) to work together, you need some means to make sure that they play in time with each other. This is called 'synchronisation' (or 'synchronization'), which gets shortened to 'sync' or even 'synch'. The SMPTE/EBU timecode standard is the predominant internationally accepted standard for a sync signal, and it allows devices to 'chase' or locate to a precise position. SMPTE timecode comes in two versions: "LTC" timecode, a sync tone which can be recorded onto the audio track of a video tape or onto an audio tape, and "VITC" timecode, which is stored inside the video signal.

There are four standard frame-rate formats. The SMPTE frame rate of thirty frames per second (fps) is often used for audio in America (for example the Sony 1630 format for CD mastering); it has its origins in the obsolete American mono television standard. The American colour television standard has a slightly different frame rate of about 29.97 fps. This is accommodated by the SMPTE format known as thirty-frame Drop Frame and is required for video work in America, Japan and generally the 60 Hz (mains frequency), NTSC (television standard) world. The EBU (European Broadcasting Union) standard of 25 fps is used throughout Europe, Australia and wherever the mains frequency is 50 Hz and the colour TV system is PAL or SECAM. The remaining rate of 24 fps is required for film work.

One of the wonderful things about professional video equipment is that every field of every frame on a videotape has a unique address indicated by time code. The address is recorded on a special track using SMPTE timecode; this timecode track is in addition to the CTL (control) track, the linear audio tracks, and the helical-scan video track. The address is displayed in decimal format as HH:MM:SS;FF. The consequences of every frame being permanently labeled are enormous. It makes it possible to eject a tape from its player and reload it later, and still be able to find exactly the same frame as before. Having the timecode "permanently" associated with the video means that frame-accurate "cue sheets" can be drawn up, so that the director or editor can find important points in the program just by seeking to a specified timecode number. Timecode thus allows editing sessions to be spread out over days or even weeks, with perfect confidence that any edit point can be precisely revisited at any time.

Commonly, SMPTE code is only used where video is involved, and the video machine becomes the master with everything else slaving to it. In video editing the time code is usually adopted for synchronizing equipment and for finding exact positions on the video tape (when video is recorded, time code is recorded along with it, so it can be used to find specific positions again and again). In audio post-production, SMPTE has been adopted for machine synchronisation and as a reference of tape position: you record SMPTE timecode to a spare track on tape and feed this (audio) signal into a box which converts it into MIDI Time Code, which much audio equipment uses. There are also other time code systems in use in some special applications (for example MIDI time code, proprietary video time code systems etc.).
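Drop-frame counting reconciles the 29.97 fps rate with wall-clock time by skipping frame numbers 00 and 01 at the start of every minute, except every tenth minute. A minimal sketch converting an elapsed frame count to a drop-frame address (illustrative, not broadcast-grade):

```python
def frames_to_dropframe(frame_count: int) -> str:
    """Convert an elapsed frame count at 29.97 fps to HH:MM:SS;FF drop-frame."""
    # A 10-minute block holds 17982 real frames: one full minute of 1800
    # labels plus nine "short" minutes of 1798 labels (00 and 01 skipped).
    tens_of_minutes, rem = divmod(frame_count, 17982)
    if rem < 1800:
        minutes_into_block = 0
    else:
        minutes_into_block = 1 + (rem - 1800) // 1798
    dropped = 2 * (tens_of_minutes * 9 + minutes_into_block)
    n = frame_count + dropped          # nominal 30 fps label index
    ff = n % 30
    ss = (n // 30) % 60
    mm = (n // 1800) % 60
    hh = n // 108000
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"

print(frames_to_dropframe(1799))   # 00:00:59;29
print(frames_to_dropframe(1800))   # 00:01:00;02  (00 and 01 were skipped)
print(frames_to_dropframe(17982))  # 00:10:00;00  (no skip at minute 10)
```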
- Intro to Timecode & Practices for Recording Source Tapes - One of the wonderful things about professional video equipment is that every field of every frame on a videotape has a unique address. The consequences of every frame being permanently labeled are enormous. It makes it possible to eject a tape from its player and reload it later, and still be able to find exactly the same frame as before.
- SMPTE timecode 80-bit data format - SMPTE timecode is based on a bit rate of 2400 bits per second, which provides a data format of 80 bits per frame (in a 30-frame-per-second system). This way it can be read at variable speed, from one-fifth up to 20 times normal speed. Longitudinal Timecode (LTC) is recorded on an audio track. Vertical Interval Timecode (VITC) is recorded into the vertical sync period of the video frame signal, so it can be read even with the video tape player in pause.
- Synchronisation and SMPTE timecode - mainly a technical description of the SMPTE timecode synchronization system, especially the LTC time code
- Technical Introduction to Timecode
- Time Code Primer - introduction to time code and its use
- Using SMPTE time code
Video signal distribution
Generally the majority of video interfaces are designed for point-to-point connection: you have a signal source on one end of the cable and a signal receiver on the other end. Transmission lines have a characteristic impedance (Zo) with which they should be driven and terminated; in video, the most popular is the 75 ohm coaxial cable. Video signals are wide bandwidth signals that cover six or more octaves of frequency range: a typical video signal can start from a few tens of Hz (even DC in some applications) and can easily extend to tens of MHz (typically up to 5-6 MHz for TV broadcast video). Video engineers must match impedances to avoid reflections when driving transmission lines. Only dissipative elements (resistors) can be relied on for matching over such wide bandwidths. The use of resistors creates a loss, which the driver must compensate for with added gain; that's why most video drivers have a fixed gain of two, though some are settable.

The first information needed when designing or choosing a video driver is the bandwidth. Microscopically, video is a bit-stream, and the high-frequency end depends on the rise/fall time of the waveform. To reproduce the waveform with satisfactory fidelity, the upper -3 dB point should be between 0.35 and 0.50 divided by the rise/fall time of the video signal, putting the high end of the video bandpass in the tens or hundreds of MHz depending on the application. Macroscopically, video is an image, and to reproduce it we have to pass the rate at which it was sampled, the frame rate; this sets the low end around 2.5 to 5 Hz. AC coupling would require large capacitors, which is why most applications are DC coupled.

What if we wanted to display the camera's signal on multiple monitors? One way is to loop or tee the signal through a monitor and connect the first monitor to a second monitor. Professional video equipment typically provides loop-thru connections that allow this. These connections may terminate the signal automatically when a connection is made; if not, the signal needs to be terminated manually, using either a switch near the connector or a special connector with a precision resistor inside (a 75-ohm termination). Only one termination should be placed on the video signal, and it should be at the end: double-terminated signals look dark, while unterminated signals appear overly bright. Consumer video equipment normally terminates internally and does not allow looping signals. It is perfectly acceptable to loop video signals as long as the overall cable length and the number of loop-thrus are not excessive; five or fewer loop-thrus is generally acceptable.

A better way to increase the number of signals is through a distribution amplifier (DA). DAs, available for most signal types, amplify the signal and typically provide four to eight outputs for each input. This allows you to feed the same signal to up to eight picture monitors or other destinations with the signal from one camera. Looping an input signal through several DA inputs can quickly allow 50 to 100 outputs. Sometimes, when a video signal needs to be distributed to many receivers, an RF distribution system is used. This works exactly like a common antenna system or cable TV: you feed the signal or several signals in at the start of the network, and the signal (or signals) can be received by all TVs connected to this antenna network.
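The "dark vs. bright" symptom of wrong termination has a simple voltage-divider explanation: the source is back-terminated in 75 ohms, so the received level depends on the load formed by the terminations. A quick sketch of the arithmetic:

```python
# Received video level vs. termination, for a source back-terminated in 75 ohms.
def received_level(source_vpp: float, load_ohms: float,
                   source_ohms: float = 75.0) -> float:
    """Voltage divider between the 75 ohm back-termination and the load."""
    return source_vpp * load_ohms / (load_ohms + source_ohms)

v_source = 2.0  # drivers use a gain of two so that 1 Vpp arrives when terminated

print(received_level(v_source, 75.0))      # 1.0 Vpp: correct, one termination
print(received_level(v_source, 75.0 / 2))  # ~0.67 Vpp: double-terminated, dark
print(received_level(v_source, 1e6))       # ~2.0 Vpp: unterminated, too bright
```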
With a modulator, you can take a video signal and its associated audio and modulate them up to RF. Channel 3-4 modulators are usually easily available and allow the use of a television for monitoring both video and audio (most home VCRs also have a modulator in them). Although this kind of RF system is not recommended in a studio environment, it can be quite handy out on the road.

When video lines get long, the cable length needs to be taken into consideration. A video cable is a transmission line, and a transmission line's bandwidth depends on its length. For example, at 10 MHz, 100 ft (about 30 meters) of RG-59A has 1.1 dB IL (insertion loss), 200 ft (about 60 meters) has 2.2 dB IL, and 300 ft (about 90 meters) has 3.3 dB. Depending on the length, NTSC or PAL video experiences little loss, but HDTV or SXGA video would be affected much more. To correct for this, the line is "equalized" to restore the overall response to the necessary application bandwidth. The equalizer has an inverse frequency characteristic compared to that of the transmission line, creating a flat response at the end of the line. There are so-called "cable compensating" video amplifiers that can do this cable compensation. Generally those amplifiers have some form of setting where the user sets the amount of compensation to add; the need for compensation depends on the cable length and the type of cable used (a more lossy cable needs more compensation).

- Driving Video Lines - When does a trace or a wire become a transmission line? Bandwidth, characteristic impedance, ESD, and shoot-through considerations for selecting the proper video driver, receiver, mux-amp, or buffer.
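Coax insertion loss in dB grows linearly with length and, in the skin-effect region, roughly with the square root of frequency, which is why the figures above matter much more for HDTV or SXGA than for NTSC. A rough estimator (cable constants vary; the 1.1 dB per 100 ft at 10 MHz figure is taken from the text above):

```python
import math

def cable_loss_db(length_ft: float, freq_hz: float,
                  ref_loss_db: float = 1.1,
                  ref_len_ft: float = 100.0,
                  ref_freq_hz: float = 10e6) -> float:
    """Scale a reference loss linearly with length and ~sqrt(f) (skin effect)."""
    return ref_loss_db * (length_ft / ref_len_ft) * math.sqrt(freq_hz / ref_freq_hz)

print(cable_loss_db(300, 10e6))   # 3.3 dB, matches the figure in the text
print(cable_loss_db(300, 100e6))  # ~10.4 dB at SXGA/HDTV-like bandwidths
```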
Video signal switching
Video signals can be switched with mechanical switches. With two cameras, the simplest method is to use an A-B switch. A-B switches have two inputs and one output: camera A goes to one input, camera B to the other, and the monitor connects to the output or common terminal. With simple (passive) switches the picture will likely roll every time the feed is switched. Avoiding this requires a vertical interval switch: switching video signals during the vertical interval keeps the switching disturbance out of the picture area and reduces the likelihood of a vertical roll. Eliminating the roll entirely requires both a vertical interval switch and that the signals be synchronized, or genlocked. If the signals are not genlocked, it is not always possible to make the switch during the vertical intervals of both signals.
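The control logic of a vertical interval switch boils down to deferring the cut until blanking. A toy model of that idea (real switchers do this in hardware with sync separators; everything here is illustrative):

```python
def vertical_interval_switch(fields, cut_at):
    """Route one of two sources, changing over only between fields.

    fields: iterable of dicts like {"A": field_data, "B": field_data}
    cut_at: dict mapping field index -> source name ("A" or "B")
    """
    active = "A"
    for index, sources in enumerate(fields):
        # Act on a pending cut request here, at the field boundary,
        # which corresponds to the vertical blanking interval.
        active = cut_at.get(index, active)
        yield sources[active]

# Cut from camera A to camera B at the start of field 3:
fields = [{"A": f"A{i}", "B": f"B{i}"} for i in range(5)]
print(list(vertical_interval_switch(fields, {3: "B"})))
# ['A0', 'A1', 'A2', 'B3', 'B4']
```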
Video standards conversion
When it comes to converting between TV systems, this is costly. A decent TV standards converter is not cheap or easy to make. Converting one TV standard to another (for example NTSC to PAL) involves changing the color coding, the field rate and the resolution. PAL to NTSC and NTSC to PAL converter boxes generally work so that they take in a composite video signal and output a composite video signal. Some commercial boxes also have other video signal connection options (for example S-video). The first conversion, the color coding, is quite easy, but doing the latter ones (field rate and resolution) properly is complicated if you want to do it well. Conversion from NTSC to real PAL (or the other way around) may produce some stuttering in the image. The simplest commercial NTSC to PAL converters just convert the color coding and leave everything else in the video signal (number of scanlines, signal timing, field rate etc.) as it is in the NTSC signal. This kind of converted signal is not a standard PAL signal, but something which is usually called "pseudo PAL" or "PAL60". Most modern PAL TVs can show this signal nicely on the screen with colors, but you can't, for example, record this signal to a PAL VCR.
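The stuttering mentioned above can be seen in a small numeric experiment. The sketch below shows the crudest possible field-rate conversion, nearest-field mapping, which simply picks the closest input field for each output field instead of interpolating motion; the function is illustrative, not how any particular converter actually works.

```python
def nearest_field_map(n_out, in_rate=59.94, out_rate=50.0):
    """For each output field, return the index of the nearest input field
    (crude 60->50 field/s conversion, no motion compensation)."""
    return [round(i * in_rate / out_rate) for i in range(n_out)]

print(nearest_field_map(10))
# [0, 1, 2, 4, 5, 6, 7, 8, 10, 11] -- input fields 3 and 9 are dropped
```

The dropped fields are exactly what the eye perceives as stutter on motion; good converters avoid this by motion-compensated interpolation, which is why they are expensive.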
- How film is transferred to video - The goal of this article is to make the reader understand how a movie is shot and later transferred to home video. Rate this link
- How Video Formatting Works - If you've watched many movies on video, you've probably read the words, "This film has been modified from its original version." But how has it been modified? The message that appears at the beginning of video tapes isn't very specific. As it turns out, there are a number of ways video producers modify theatrical films for video release, and elements of these processes have sparked heated debates about maintaining artistic visions. Rate this link
- Letterbox and Widescreen Advocacy Page - This page describes the difference of letterbox and widescreen picture formats. Rate this link
- Telecining - Telecining is a process by which video that runs at 24 frames per second (fps) is converted to run at ~30 fps or 25 fps. The telecining process is used on many types of video, such as films, most cartoons, and many other kinds of programs. Rate this link
- What Is 3:2 Pulldown? - Film runs at a rate of 24 frames per second, and video that adheres to the NTSC television standard runs at 30 frames a second. When film is converted to NTSC video, the frame rates need to be matched in order for the film-to-video conversion to take place. This process of matching the film frame rate to the video frame rate is called 3:2 pulldown. Rate this link
Film to video conversion
Telecining is a method by which progressive video that runs at 24 fps (such as a film) is converted to a format which can be displayed on a TV. The word "telecine" is derived from the words television and cinema. Telecining is a process by which video that runs at 24 frames per second (fps) is converted to run at ~30 fps or 25 fps. This is necessary because a television can only display video at ~30 fps (in NTSC based countries) or 25 fps (in PAL/SECAM based countries). The telecining process is used on many types of video, such as films, most cartoons, and many other kinds of programs. A telecine is a device used for scanning photographic motion-picture images and transcoding them into video images in one of the standardized video formats. Its most common usage is to prepare videotape transfers from completed film programs. Film scanner is a more general term; telecine is frequently reserved for a scanner that operates only in real time. In addition to scanning the film images, telecines must reconcile the speed and frame count differences between various film and video formats.

NTSC telecining is usually done using a process called 3:2 pulldown. In this process one film frame is shown for two TV fields, the next film frame for three TV fields, the next for two TV fields, and so on. This process works well, looks quite nice on TV and is technically quite simple. In general, PAL/SECAM telecining does not use duplicated fields. Instead, most 24 fps films are simply sped up by 4% to play at 25 fps on a PAL/SECAM system. In most cases, one film frame makes one TV picture frame, but in some cases of the 24 fps to 25 fps conversion, one field of video is shifted by a frame (this looks fine on TV but can cause problems when the video signal is processed further). In very rare cases a 24 fps video is converted to 25 fps PAL/SECAM video by duplicating 2 fields over 24 frames to produce 25 frames.

In addition to the different frame rate and interlacing, TV displays and movies use different aspect ratios. The normal TV screen has a 4:3 aspect ratio, whereas practically all movies (except some very old ones) use a much wider picture format. Panning & scanning is the oldest and most used method of converting a widescreen image to fit an old-fashioned 4:3 TV screen. In this way of transferring images, the resolution is kept as high as it can be, but at the cost of missing picture area (you see only part of the picture the movie theater audience sees). Another way to do the aspect ratio conversion is letterboxing. In this conversion the whole movie frame is shown on the TV screen, but it fills only part of the 4:3 TV screen. The unused parts (above and below the movie image) are filled with black video signal. This can be viewed nicely on a normal 4:3 TV (although the picture can look somewhat small) and looks good on a 16:9 TV as well (with the "zoom" feature the 16:9 TV owner can get the whole screen filled with picture). The problem inherent with letterboxing is that a lot of perfectly good video resolution is lost in the black lines. However, this has now changed. With DVD taking a larger market all over the world, HDTV coming to the USA and digital TV coming to Europe, there is a new video format that can be used. This video mode has many names, like "16:9 Enhanced Widescreen", "anamorphic widescreen" or simply "16:9". The point is that instead of optimizing the video image for a 4:3 TV set, it is optimized for a 16:9 TV set.
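As a concrete illustration of the 3:2 pulldown cadence described above, the following minimal sketch generates the field sequence for four film frames A B C D. The alternating 2-field/3-field pattern turns 4 film frames into 10 fields, i.e. 5 interlaced video frames, which is exactly the 24 fps to ~30 fps ratio.

```python
def pulldown_32(frames):
    """Generate the 3:2 pulldown field cadence: alternate film frames
    are held for 2 and 3 TV fields respectively."""
    fields = []
    for i, frame in enumerate(frames):
        repeat = 2 if i % 2 == 0 else 3   # 2 fields, then 3, then 2, ...
        fields.extend(f"{frame}{n + 1}" for n in range(repeat))
    return fields

print(pulldown_32("ABCD"))
# ['A1', 'A2', 'B1', 'B2', 'B3', 'C1', 'C2', 'D1', 'D2', 'D3']
```

The repeated fields (B3 and D3 here) are the duplicates that later cause interlacing artifacts when telecined NTSC material is processed on a computer, as discussed below.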
For example, all 20" or bigger 4:3 TV sets sold in the European Union since about 1995 have an "anamorphic squeeze" mode to be able to handle this new format. Also many digital playback devices (DVD players, digital TV set-top boxes) have options to set whether this signal format is used or not. If you try to push a "16:9 Enhanced Widescreen" signal to a TV which does not support it, you will get a picture that is around 33% too tall. Playing a telecined video on a television will usually look okay. However, problems can arise when capturing and playing a telecined video on a computer. With NTSC telecining, interlacing artifacts are caused by the duplicate fields of video that are used to increase the number of frames displayed each second. Generally a telecined PAL signal does not cause problems in computer processing, but in the cases where one field of video is shifted by a frame, you get a video signal which looks okay on an interlaced display (a television) but produces interlacing artifacts on a computer monitor and when the video is digitized.
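Two of the figures quoted above follow from simple arithmetic, shown in this short sketch: the PAL telecine speedup is the ratio of the frame rates, and the "33% too tall" anamorphic error is the ratio of the two aspect ratios.

```python
# PAL/SECAM telecine: 24 fps film played at 25 fps.
pal_speedup = 25 / 24
print(f"PAL speedup: {(pal_speedup - 1) * 100:.1f}%")   # ~4.2%, i.e. "about 4%"

# Anamorphic 16:9 material displayed unsqueezed on a 4:3 set is stretched
# vertically by the ratio of the aspect ratios.
stretch = (16 / 9) / (4 / 3)
print(f"Picture too tall by: {(stretch - 1) * 100:.0f}%")  # ~33%
```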
- Video Switches route analog signals along paths of least resistance - analog video is very much alive and well despite the advance of digital video, properly using wideband, low-distortion switches and drivers, you can guide signals to their intended destinations and various switching configurations provide flexibility in routing as well as path control Rate this link
- What is SSTV? - information about Slow Scan Television (SSTV) used by Radio Amateurs Rate this link
Other related information
Related pages